
    Gluing together proof environments: Canonical extensions of LF type theories featuring locks

    © F. Honsell, L. Liquori, P. Maksimovic, I. Scagnetto. This work is licensed under the Creative Commons Attribution License. We present two extensions of the LF Constructive Type Theory featuring monadic locks. A lock is a monadic type construct that captures the effect of an external call to an oracle. Such calls are the basic tool for gluing together diverse type theories and proof development environments. The oracle can be invoked either to check that a constraint holds or to provide a suitable witness. The systems are presented in the canonical style developed by the CMU school. The first system, CLLFP, is the canonical version of the system LLFP presented earlier by the authors. The second system, CLLFP?, features the possibility of invoking the oracle to obtain a witness satisfying a given constraint. We discuss encodings of Fitch-Prawitz Set Theory, call-by-value λ-calculi, and systems of Light Linear Logic. Finally, we show how to use Fitch-Prawitz Set Theory to define a type system that types precisely the strongly normalizing terms.

    The Speed of Light and the Hubble Parameter: The Mass-Boom Effect

    We prove here that Newton's universal gravitation and momentum conservation laws together reproduce Weinberg's relation. It is shown that the Hubble parameter H must be built into this relation, or equivalently the age of the Universe t. Using a wave-to-particle interaction technique we then prove that the speed of light c decreases with cosmological time, and that c is proportional to the Hubble parameter H. We see the expansion of the Universe as a local effect due to the LAB value of the speed of light c_0 taken as constant. We present a generalized redshift law and find a predicted acceleration for photons that agrees well with the Pioneer 10/11 anomalous acceleration. We finally present a cosmological model coherent with the above results, which we call the Mass-Boom. It has a linear increase of mass m with time as a result of the linear decrease of the speed of light c with time, and the conservation of momentum mc. We obtain the baryonic mass parameter equal to the curvature parameter, omega_m = omega_k, so that the model is of the Einstein static type: closed, finite, spherical, unlimited, with zero cosmological constant. This model is the cosmological view as seen by photons, neutrinos, tachyons, etc., in contrast with the local view, the LAB reference. Neither dark matter nor dark energy is required by this model. With an initially constant speed of light during a short time we get inflation (an exponential expansion). This converts, during the inflation time, the Planck fluctuation length of 10^-33 cm to the present size of the Universe (about 10^28 cm, constant from then on). Thereafter the Mass-Boom brings the initial mass of the Universe (about 10^15 g) to its present value of about 10^55 g. Comment: 15 pages, presented at the 9th Symposium on "Frontiers of Fundamental Physics", 7-9 Jan. 2008, University of Udine, Italy. Changed content.

    Quantum Information and Wave function Collapse

    Information-theoretical restrictions on the information transferred in the measurement of an object S by an information system O are studied. It is shown that such constraints, induced by the Heisenberg commutation relations, result in the loss of information about the purity of the S state. Consequently, it becomes impossible for O to discriminate between pure and mixed S states. In individual events this effect is manifested by the stochastic outcomes of pure S state measurements, i.e. the collapse of the pure S state. Comment: 8 pages, talk given at the Symposium 'Frontiers of Fundamental Physics', Udine, Italy, January 2008, to appear in the Proceedings.

    Weak nuclear forces cause the strong nuclear force

    We determine the strength of the weak nuclear force which holds the lattices of the elementary particles together. We also determine the strength of the strong nuclear force which emanates from the sides of the nuclear lattices. The strong force is the sum of the unsaturated weak forces at the surface of the nuclear lattices. The strong force is then about 10^6 times stronger than the weak force between two lattice points. Comment: 12 pages, 1 figure.

    Hidden-variable theory versus Copenhagen quantum mechanics

    The main assumptions on which Copenhagen quantum mechanics has been based will be summarized, and the known (not yet settled) controversy between Einstein and Bohr newly analyzed. The given assumptions have been represented basically by the time-dependent Schroedinger equation, to which some further assumptions have been added. Critical comments were raised against the given mathematical model structure by Pauli (1933) and by Susskind and Glogower (1964). They may be removed if only the Schroedinger equation is retained and the additional assumptions are abandoned, as shown recently. This seems to contradict the numerous declarations that the Copenhagen model has been confirmed by experimental results. However, most of these experiments tested only the agreement with the mere Schroedinger equation. All the mentioned assumptions have been tested practically only in the EPR experiment (measurement of coincidence light transmission through two polarizers) proposed originally by Einstein (1935). These experimental results, too, have been interpreted as supporting the Copenhagen alternative, which has not, however, been true. In fact the microscopic world may be described correspondingly only with the help of the hidden-variable theory represented by the Schroedinger equation without the mentioned additional assumptions, with the consequence that the earlier interpretation gap between the microscopic and macroscopic worlds is removed. The only difference concerns the existence of discrete states. The possibilities of human reason in getting to know nature will also be shortly discussed at the beginning of this contribution. Comment: 10 pages, 2 figures; v2: local refinements and improvements of the text.

    A dependent nominal type theory

    Nominal abstract syntax is an approach to representing names and binding pioneered by Gabbay and Pitts. So far nominal techniques have mostly been studied using classical logic or model theory, not type theory. Nominal extensions to simple, dependent and ML-like polymorphic languages have been studied, but decidability and normalization results have only been established for simple nominal type theories. We present an LF-style dependent type theory extended with name-abstraction types, prove soundness and decidability of beta-eta-equivalence checking, discuss adequacy and canonical forms via an example, and discuss extensions such as dependently-typed recursion and induction principles.

    The involutions-as-principal-types / application-as-unification analogy

    In 2005, S. Abramsky introduced various universal models of computation based on Affine Combinatory Logic, consisting of partial involutions over a suitable formal language of moves, in order to discuss reversible computation in a game-theoretic setting. We investigate Abramsky's models from the point of view of the model theory of λ-calculus, focusing on the purely linear and affine fragments of Abramsky's combinatory algebras. Our approach stems from realizing a structural analogy, not hitherto pointed out in the literature, between the partial involution interpreting a combinator and the principal type of that term with respect to a simple-types discipline for λ-calculus. This analogy allows for explaining as unification between principal types the somewhat awkward linear application of involutions arising from the Geometry of Interaction (GoI). Our approach immediately provides an answer to the open problem, raised by Abramsky, of characterising those finitely describable partial involutions which are denotations of combinators in the purely affine fragment. We also prove that the (purely) linear combinatory algebra of partial involutions is a (purely) linear λ-algebra, albeit not a combinatory model, while the (purely) affine combinatory algebra is not. In order to check the complex equations involved in the definition of affine λ-algebra, we implement in Erlang the compilation of λ-terms as involutions, and their execution.
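    The "application as unification" idea the abstract refers to can be illustrated with standard machinery: the type of an application is computed by unifying the function's principal type against (argument type → fresh result variable). The sketch below is ordinary Hindley-Milner-style unification, not the paper's partial-involution semantics; all names (Var, Arrow, unify, apply_type) are our own.

    ```python
    class Var:
        """A type variable with a fresh numeric id."""
        _count = 0
        def __init__(self):
            Var._count += 1
            self.id = Var._count
        def __repr__(self):
            return f"t{self.id}"

    class Arrow:
        """A function type: left -> right."""
        def __init__(self, left, right):
            self.left, self.right = left, right
        def __repr__(self):
            return f"({self.left!r} -> {self.right!r})"

    def walk(t, subst):
        # Chase a variable through the substitution to its current binding.
        while isinstance(t, Var) and t in subst:
            t = subst[t]
        return t

    def unify(a, b, subst):
        # Extend subst so that a and b become equal, or fail.
        # (No occurs check: enough for this sketch, not for production.)
        a, b = walk(a, subst), walk(b, subst)
        if a is b:
            return subst
        if isinstance(a, Var):
            subst[a] = b
            return subst
        if isinstance(b, Var):
            subst[b] = a
            return subst
        if isinstance(a, Arrow) and isinstance(b, Arrow):
            subst = unify(a.left, b.left, subst)
            return unify(a.right, b.right, subst)
        raise TypeError(f"cannot unify {a!r} with {b!r}")

    def resolve(t, subst):
        # Fully apply the substitution to a type.
        t = walk(t, subst)
        if isinstance(t, Arrow):
            return Arrow(resolve(t.left, subst), resolve(t.right, subst))
        return t

    def apply_type(fun_ty, arg_ty):
        # Application as unification: (f : fun_ty) applied to (x : arg_ty)
        # gets the type obtained by unifying fun_ty with arg_ty -> fresh.
        result = Var()
        subst = unify(fun_ty, Arrow(arg_ty, result), {})
        return resolve(result, subst)

    # Principal type of the combinator I = \x. x is a -> a ...
    a = Var()
    identity_ty = Arrow(a, a)
    # ... so applying I to an argument of type (b -> b) yields b -> b again.
    b = Var()
    print(apply_type(identity_ty, Arrow(b, b)))
    ```

    The unification step plays the role that, in the paper's analogy, the GoI-style linear application of involutions plays on the semantic side.
    
    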

    A definitional implementation of the Lax logical framework LLFP in Coq, for supporting fast and loose reasoning

    The Lax Logical Framework, LLFP, was introduced by a team including the last two authors to provide a conceptual framework for integrating different proof development tools, thus allowing for external evidence and for postponing, delegating, or factoring out side conditions. In particular, LLFP allows for reducing the number of times a proof-irrelevant check is performed. In this paper we give a shallow, actually definitional, implementation of LLFP in Coq, i.e. we use Coq both as host framework and as oracle for LLFP. This illuminates the principles underpinning the mechanism of lock-types and also suggests how Coq might be extended with the features of LLFP. The derived proof editor is then put to use for developing case studies on an emerging paradigm, at both the logical and implementation levels, which we call fast and loose reasoning following Danielsson et al. [6]. This paradigm trades correctness for efficiency and amounts to postponing, or running in parallel, tedious or computationally demanding checks, until we are really sure that the intended goal can be achieved. Typical examples are branch prediction in CPUs and optimistic concurrency control.

    Biological Principles in Self-Organization of Young Brain - Viewed from Kohonen Model

    Variants of the Kohonen model are proposed to study biological principles of self-organization in a model of the young brain. We suggest a function to measure acquired knowledge and use it to auto-adapt the topology of neuronal connectivity, yielding substantial organizational improvement relative to the standard model. In the early phase of organization, with most intense learning, we observe that neural connectivity is of small-world type, which is very efficient for organizing neurons in response to stimuli. In analogy to the human brain, where pruning of neural connectivity (and neuron cell death) occurs in early life, this feature is also present in our model and is found to stabilize the neuronal response to stimuli.